A stochastic variance-reduced accelerated primal-dual method for finite-sum saddle-point problems


Abstract

In this paper, we propose a variance-reduced primal-dual algorithm with Bregman distance functions for solving convex-concave saddle-point problems with a finite-sum structure and a nonbilinear coupling function. This type of problem typically arises in machine learning and game theory. Under standard assumptions, the algorithm is proved to converge with oracle complexities $\mathcal{O}(\frac{\sqrt{n}}{\epsilon})$ and $\mathcal{O}(\frac{n}{\sqrt{\epsilon}}+\frac{1}{\epsilon^{1.5}})$ using constant and non-constant parameters, respectively, where $n$ is the number of function components. Compared with existing methods, our framework yields a significant improvement in the number of gradient samples required to achieve an $\epsilon$-accuracy gap. We also present numerical experiments that showcase the superior performance of our method compared with state-of-the-art methods.
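To make the finite-sum saddle-point setting concrete, the following is a minimal sketch of an SVRG-style variance-reduced primal-dual iteration on a toy scalar problem. This is an illustration of the general idea only, not the paper's algorithm: the problem, step sizes, and snapshot schedule here are all made up for the example, and Bregman distances are replaced by plain Euclidean steps.

```python
import numpy as np

# Toy finite-sum saddle problem (hypothetical, for illustration):
# L(x, y) = (1/n) sum_i [ a_i/2 x^2 + b_i x y + p_i x - c_i/2 y^2 - q_i y ],
# each component strongly convex in x and strongly concave in y.
rng = np.random.default_rng(0)
n = 50
a = rng.uniform(1.0, 2.0, n)
c = rng.uniform(1.0, 2.0, n)
b = rng.uniform(-0.5, 0.5, n)
p = rng.uniform(-1.0, 1.0, n)
q = rng.uniform(-1.0, 1.0, n)

def grad_i(i, x, y):
    """Gradient of the i-th component w.r.t. x and w.r.t. y."""
    return a[i] * x + b[i] * y + p[i], b[i] * x - c[i] * y - q[i]

def full_grad(x, y):
    """Full averaged gradient; evaluated only once per epoch."""
    return (a.mean() * x + b.mean() * y + p.mean(),
            b.mean() * x - c.mean() * y - q.mean())

tau = sigma = 0.1          # constant primal/dual step sizes
x = y = 1.0
for epoch in range(30):
    sx, sy = x, y                    # snapshot point
    mx, my = full_grad(sx, sy)       # full gradient at the snapshot
    for _ in range(n):
        i = rng.integers(n)
        gx, gy = grad_i(i, x, y)
        hx, hy = grad_i(i, sx, sy)
        vx, vy = gx - hx + mx, gy - hy + my   # variance-reduced estimator
        x -= tau * vx                # primal descent step
        y += sigma * vy              # dual ascent step

gx, gy = full_grad(x, y)
print(abs(gx) + abs(gy))             # residual of the saddle-point system
```

The estimator `grad_i(i, x, y) - grad_i(i, sx, sy) + full_grad(sx, sy)` is unbiased, and its variance shrinks as the iterate approaches the snapshot, which is what allows constant step sizes.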


Similar articles

Adaptive Primal-dual Hybrid Gradient Methods for Saddle-point Problems

The Primal-Dual hybrid gradient (PDHG) method is a powerful optimization scheme that breaks complex problems into simple sub-steps. Unfortunately, PDHG methods require the user to choose stepsize parameters, and the speed of convergence is highly sensitive to this choice. We introduce new adaptive PDHG schemes that automatically tune the stepsize parameters for fast convergence without user inp...
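As a concrete reference point for the sub-steps PDHG takes, here is a minimal non-adaptive PDHG sketch on the saddle-point reformulation of least squares; the adaptive stepsize tuning described in the abstract is omitted, and `tau`/`sigma` are simply fixed to satisfy the standard condition `tau * sigma * ||K||^2 < 1`. The problem instance is invented for the example.

```python
import numpy as np

# min_x (1/2)||Kx - b||^2  rewritten as the saddle-point problem
#   min_x max_y <Kx, y> - (1/2)||y||^2 - <b, y>,
# since the conjugate of f(z) = (1/2)||z - b||^2 is (1/2)||y||^2 + <b, y>.
rng = np.random.default_rng(1)
K = rng.standard_normal((20, 10))
b = rng.standard_normal(20)

L = np.linalg.norm(K, 2)         # operator norm of K
tau = sigma = 0.99 / L           # fixed steps with tau * sigma * L^2 < 1

x = np.zeros(10)
y = np.zeros(20)
for _ in range(20000):
    x_new = x - tau * (K.T @ y)          # primal step (g = 0, so no prox)
    x_bar = 2 * x_new - x                # extrapolation step
    # Dual prox step for f*(y) = (1/2)||y||^2 + <b, y>:
    y = (y + sigma * (K @ x_bar - b)) / (1 + sigma)
    x = x_new

x_ls, *_ = np.linalg.lstsq(K, b, rcond=None)  # reference solution
print(np.linalg.norm(x - x_ls))
```

The sensitivity the abstract mentions is visible here: shrinking `tau` (and growing `sigma`, or vice versa) while keeping the product fixed changes the convergence speed substantially, which is what the adaptive schemes exploit.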


Stochastic Variance Reduction Methods for Saddle-Point Problems

We consider convex-concave saddle-point problems where the objective functions may be split into many components, and extend recent stochastic variance reduction methods (such as SVRG or SAGA) to provide the first large-scale linearly convergent algorithms for this class of problems, which are common in machine learning. While the algorithmic extension is straightforward, it come...
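To complement the SVRG-style snapshot estimator, a SAGA-style estimator keeps a table of the most recently seen component gradients instead of periodic full-gradient passes. The sketch below applies this idea to a toy scalar saddle problem; it is an illustration of the estimator, not the algorithm from this article.

```python
import numpy as np

# Toy problem (hypothetical): L(x, y) = (1/n) sum_i [ a_i/2 x^2 + b_i x y
#                                        + p_i x - c_i/2 y^2 - q_i y ].
rng = np.random.default_rng(2)
n = 50
a = rng.uniform(1.0, 2.0, n)
c = rng.uniform(1.0, 2.0, n)
b = rng.uniform(-0.5, 0.5, n)
p = rng.uniform(-1.0, 1.0, n)
q = rng.uniform(-1.0, 1.0, n)

def grad_i(i, x, y):
    return a[i] * x + b[i] * y + p[i], b[i] * x - c[i] * y - q[i]

x = y = 1.0
# SAGA memory: last evaluated gradient of every component, plus its average.
table = np.array([grad_i(i, x, y) for i in range(n)])   # shape (n, 2)
avg = table.mean(axis=0)

step = 0.1
for _ in range(5000):
    i = rng.integers(n)
    g = np.array(grad_i(i, x, y))
    v = g - table[i] + avg           # unbiased, variance-reduced estimate
    x -= step * v[0]                 # primal descent
    y += step * v[1]                 # dual ascent
    avg += (g - table[i]) / n        # maintain the running average
    table[i] = g

# Residual of the full saddle-point optimality system:
gx = a.mean() * x + b.mean() * y + p.mean()
gy = b.mean() * x - c.mean() * y - q.mean()
print(abs(gx) + abs(gy))
```

The trade-off versus SVRG is memory for computation: SAGA stores `n` gradients but never recomputes a full gradient.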


Adaptive Stochastic Primal-Dual Coordinate Descent for Separable Saddle Point Problems

We consider a generic convex-concave saddle point problem with a separable structure, a form that covers a wide range of machine learning applications. Under this problem structure, we follow the framework of primal-dual updates for saddle point problems, and incorporate stochastic block coordinate descent with adaptive stepsizes into this framework. We theoretically show that our proposal of ada...


Doubly Stochastic Primal-Dual Coordinate Method for Bilinear Saddle-Point Problem

We propose a doubly stochastic primal-dual coordinate optimization algorithm for empirical risk minimization, which can be formulated as a bilinear saddle-point problem. In each iteration, our method randomly samples a block of coordinates of the primal and dual solutions to update. The linear convergence of our method could be established in terms of 1) the distance from the current iterate to...
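The "doubly stochastic" idea, sampling one primal and one dual coordinate per iteration, can be sketched on a small regularized bilinear saddle problem. This is a simplified illustration under made-up data and step sizes; the paper's exact update rules and convergence analysis differ.

```python
import numpy as np

# Regularized bilinear saddle problem (hypothetical instance):
# L(x, y) = y^T A x + (1/2)||x||^2 - (1/2)||y||^2 - b^T y.
rng = np.random.default_rng(3)
A = 0.3 * rng.standard_normal((4, 6))
bvec = rng.standard_normal(4)

x = np.zeros(6)
y = np.zeros(4)
step = 0.2
for _ in range(40000):
    j = rng.integers(6)              # sample one primal coordinate
    i = rng.integers(4)              # sample one dual coordinate
    x[j] -= step * (A[:, j] @ y + x[j])          # coordinate descent in x
    y[i] += step * (A[i] @ x - y[i] - bvec[i])   # coordinate ascent in y

# Optimality residual of the saddle-point system (zero at the saddle):
r = np.linalg.norm(A.T @ y + x) + np.linalg.norm(A @ x - y - bvec)
print(r)
```

Each iteration touches only one row and one column of `A`, which is the source of the per-iteration cost savings on large bilinear problems.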


A primal-dual algorithm framework for convex saddle-point optimization

In this study, we introduce a primal-dual prediction-correction algorithm framework for convex optimization problems with a known saddle-point structure. Our unified framework adds a proximal term with a positive definite weighting matrix. Moreover, different choices of the proximal parameters in the framework recover some existing well-known algorithms and yield a class of new primal-dual schemes. We prove the co...



Journal

Journal title: Computational Optimization and Applications

سال: 2023

ISSN: 0926-6003, 1573-2894

DOI: https://doi.org/10.1007/s10589-023-00472-5